Multi-domain Causal Structure Learning in Linear Systems
We study causal structure learning in linear systems from observational data collected in multiple domains, across which the causal coefficients and/or the distributions of the exogenous noises may vary. The main tool in our approach is the principle that in a causally sufficient system, the causal modules, as well as their parameters, change independently across domains. We first present our approach for identifying the causal direction in a system of two variables and propose efficient methods for doing so. We then generalize these methods to causal structure learning in networks of variables. Most previous work on structure learning from multi-domain data assumes that certain types of invariance hold in the causal modules across domains. Our approach unifies the ideas in those works and generalizes them to the case in which no such invariance holds. Our proposed methods can generally identify the causal direction from fewer than ten domains; when the invariance property holds, two domains are generally sufficient.
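As a toy illustration of the independence principle in the two-variable case (a minimal sketch under the assumed model X = ε_X, Y = b·X + ε_Y, not the paper's actual estimator), one can compare how strongly the module parameters of each factorization co-vary across domains: in the causal factorization p(X)p(Y|X) the parameters are drawn independently, while the parameters implied by the anti-causal factorization p(Y)p(X|Y) become entangled:

```python
import math
import random

def pearson(u, v):
    # sample Pearson correlation of two equal-length sequences
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    su = math.sqrt(sum((x - mu) ** 2 for x in u))
    sv = math.sqrt(sum((y - mv) ** 2 for y in v))
    return sum((x - mu) * (y - mv) for x, y in zip(u, v)) / (su * sv)

random.seed(0)
D = 2000  # number of domains
# per-domain module parameters, drawn independently of one another:
vx = [random.uniform(0.5, 2.0) for _ in range(D)]  # var(eps_X)
b = [random.uniform(0.5, 2.0) for _ in range(D)]   # causal coefficient
vy = [random.uniform(0.5, 2.0) for _ in range(D)]  # var(eps_Y)

# causal factorization p(X) p(Y|X): parameters of distinct modules are independent
causal_dep = abs(pearson(vx, vy))

# anti-causal factorization p(Y) p(X|Y): implied parameters in closed form
vY = [bi * bi + 0 if False else bi * bi * xi + yi
      for bi, xi, yi in zip(b, vx, vy)]                     # var(Y)
res = [xi * yi / vi for xi, yi, vi in zip(vx, vy, vY)]      # var(X | Y)
anti_dep = abs(pearson(vY, res))

print(causal_dep, anti_dep)  # dependence is markedly weaker in the causal direction
```

With independently drawn module parameters, `causal_dep` stays near zero while the reverse-direction parameters inherit a systematic correlation across domains; this asymmetry is the signal that the independence principle exploits.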
Step-by-Step Causality: Transparent Causal Discovery with Multi-Agent Tree-Query and Adversarial Confidence Estimation
Ding, Ziyi, Ye-Hao, Chenfei, Wang, Zheyuan, Zhang, Xiao-Ping
Causal discovery aims to recover ``what causes what'', but classical constraint-based methods (e.g., PC, FCI) suffer from error propagation, and recent LLM-based causal oracles often behave as opaque, confidence-free black boxes. This paper introduces Tree-Query, a tree-structured, multi-expert LLM framework that reduces pairwise causal discovery to a short sequence of queries about backdoor paths, (in)dependence, latent confounding, and causal direction, yielding interpretable judgments with robustness-aware confidence scores. Theoretical guarantees are provided for asymptotic identifiability of four pairwise relations. On data-free benchmarks derived from Mooij et al. and UCI causal graphs, Tree-Query improves structural metrics over direct LLM baselines, and a diet--weight case study illustrates confounder screening and stable, high-confidence causal conclusions. Tree-Query thus offers a principled way to obtain data-free causal priors from LLMs that can complement downstream data-driven causal discovery. Code is available at https://anonymous.4open.science/r/Repo-9B3E-4F96.
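The tree-structured reduction can be caricatured as a fixed decision tree over oracle answers; the function below is a hypothetical sketch (the names, query order, and boolean abstraction are assumptions, not the paper's implementation), where each argument stands for one expert query's aggregated verdict:

```python
def classify_pair(dependent: bool, confounded_only: bool, x_causes_y: bool) -> str:
    """Map three (hypothetical) query answers to one of four pairwise relations."""
    if not dependent:             # (in)dependence query
        return "X _||_ Y"         # no causal relation
    if confounded_only:           # latent-confounding / backdoor query
        return "X <-> Y"          # association due to a latent common cause only
    # the causal-direction query decides the two remaining cases
    return "X -> Y" if x_causes_y else "Y -> X"

print(classify_pair(True, False, True))  # prints "X -> Y"
```

In the actual framework each answer would come from multiple LLM experts with adversarially estimated confidences rather than a single boolean; the point of the sketch is only that a short, fixed sequence of queries suffices to separate the four pairwise relations.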
On the Practical Estimation and Interpretation of Rényi Transfer Entropy
Tabachová, Zlata, Jizba, Petr, Lavička, Hynek, Paluš, Milan
Rényi transfer entropy (RTE) is a generalization of classical transfer entropy that replaces Shannon's entropy with Rényi's information measure. This, in turn, introduces a new tunable parameter $α$, which controls the sensitivity to low- or high-probability events. Although RTE shows strong potential for analyzing causal relations in complex, non-Gaussian systems, its practical use is limited, primarily by challenges in its accurate estimation and interpretation. These difficulties are especially pronounced when working with finite, high-dimensional, or heterogeneous datasets. In this paper, we systematically study the performance of a k-nearest-neighbor estimator for both Rényi entropy (RE) and RTE on synthetic datasets with cause-and-effect relationships built into their construction. We test the estimator across a broad range of parameters, including sample size, dimensionality, memory length, and Rényi order $α$. In particular, we apply the estimator to a set of simulated processes of increasing structural complexity, ranging from linear dynamics to nonlinear systems with multi-source couplings. To address the interpretational challenges arising from potentially negative RE and RTE values, we introduce three reliability conditions and formulate practical guidelines for tuning the estimator parameters. We show that when the reliability conditions are met and the parameters are calibrated accordingly, the resulting effective RTE estimates accurately capture directional information flow across a broad range of scenarios. Our results show that the explanatory power of RTE depends sensitively on the choice of the Rényi parameter $α$, which highlights the usefulness of the RTE framework for identifying the drivers of extreme behavior in complex systems.
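For intuition on the role of $α$, a discrete plug-in estimate (a minimal sketch, not the paper's k-nearest-neighbor estimator) makes the re-weighting explicit: $α < 1$ emphasizes rare events, $α > 1$ emphasizes probable ones, and $α \to 1$ recovers Shannon entropy:

```python
import math
from collections import Counter

def renyi_entropy(samples, alpha):
    """Plug-in Rényi entropy estimate in bits: H_a = log2(sum_i p_i^a) / (1 - a)."""
    n = len(samples)
    probs = [c / n for c in Counter(samples).values()]
    if abs(alpha - 1.0) < 1e-12:  # Shannon limit as alpha -> 1
        return -sum(p * math.log2(p) for p in probs)
    return math.log2(sum(p ** alpha for p in probs)) / (1.0 - alpha)

skewed = [0] * 9 + [1]  # one dominant outcome, one rare event
print(renyi_entropy(skewed, 0.5))  # rare event weighted up
print(renyi_entropy(skewed, 2.0))  # dominated by the probable outcome
```

Since Rényi entropy is non-increasing in $α$, the low-$α$ estimate for the skewed sample exceeds the high-$α$ one; a continuous k-NN estimator of the kind studied in the paper plays the same role for real-valued data, where finite-sample effects can additionally push estimates negative.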
Entropic Causal Inference: Identifiability and Finite Sample Results
Entropic causal inference is a framework for inferring the causal direction between two categorical variables from observational data. The central assumption is that the amount of unobserved randomness in the system is not too large. This unobserved randomness is measured by the entropy of the exogenous variable in the underlying structural causal model, which governs the causal relation between the observed variables. Kocaoglu et al. conjectured that the causal direction is identifiable when the entropy of the exogenous variable is not too large. In this paper, we prove a variant of their conjecture.
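The framework's decision rule can be sketched with a greedy minimum-entropy-coupling approximation in the spirit of Kocaoglu et al. (variable names and the toy data below are assumptions, not the paper's code): for each candidate direction, approximate the smallest exogenous entropy H(E) compatible with a model effect = f(cause, E) with E independent of the cause, and prefer the direction that needs less unobserved randomness:

```python
import math

def greedy_coupling_entropy(conds):
    """Greedily couple a set of conditional distributions; the entropy (in bits)
    of the constructed joint mass approximates the minimal H(E) for a direction."""
    conds = [list(c) for c in conds]
    masses = []
    while True:
        r = min(max(c) for c in conds)  # largest mass placeable in one joint cell
        if r < 1e-12:
            break
        masses.append(r)
        for c in conds:
            c[c.index(max(c))] -= r     # consume that mass from every conditional
    return sum(-p * math.log2(p) for p in masses if p > 1e-12)

def conditionals(pairs, cause_vals, effect_vals):
    # empirical p(effect | cause = x) for each cause value x
    out = []
    for x in cause_vals:
        sub = [e for c, e in pairs if c == x]
        out.append([sub.count(y) / len(sub) for y in effect_vals])
    return out

# toy data with a deterministic mechanism Y = X mod 2
pairs = [(x, x % 2) for x in (0, 1, 2) for _ in range(10)]
h_xy = greedy_coupling_entropy(conditionals(pairs, [0, 1, 2], [0, 1]))
h_yx = greedy_coupling_entropy(conditionals([(y, x) for x, y in pairs], [0, 1], [0, 1, 2]))
print(h_xy, h_yx)  # X -> Y needs no exogenous randomness; Y -> X needs a full bit
```

Because the forward mechanism is deterministic, the conditionals p(Y|X=x) are point masses and couple with zero entropy, while the reverse conditionals require one bit of exogenous randomness, so the rule picks X → Y; the identifiability question addressed in the paper is when this preference provably matches the true direction.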